Generating Spatial Synthetic Populations Using Wasserstein Generative Adversarial Network: A Case Study with EU-SILC Data for Helsinki and Thessaloniki
Using agent-based social simulations can enhance our understanding of urban planning, public health, and economic forecasting. Realistic synthetic populations with numerous attributes strengthen these simulations. The Wasserstein Generative Adversarial Network, trained on census data like EU-SILC, can create robust synthetic populations. These methods, aided by external statistics or EU-SILC weights, generate spatial synthetic populations for agent-based models. The increased access to high-quality micro-data has sparked interest in synthetic populations, which preserve demographic profiles and analytical strength while ensuring privacy and preventing discrimination. This study uses national data from Finland and Greece for Helsinki and Thessaloniki to explore balanced spatial synthetic population generation. Results show challenges related to balancing data with or without aggregated statistics for the target population and the general under-representation of fringe profiles by deep generative methods. The latter can lead to discrimination in agent-based simulations.
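The Wasserstein (earth mover's) distance that gives the WGAN its name can be illustrated for one-dimensional empirical distributions, where it reduces to the mean absolute difference between sorted samples. This is a minimal sketch with made-up numbers, not the study's implementation:

```python
def wasserstein_1d(xs, ys):
    """1-D earth mover's distance between two equal-sized empirical samples.

    For 1-D distributions with equal sample counts, the optimal transport
    plan matches sorted samples, so W1 is simply the mean absolute
    difference of the order statistics.
    """
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Hypothetical "real" ages from a micro-data sample vs. a synthetic draw
# in which every age is shifted by +2 years.
real = [23, 31, 35, 44, 52, 60]
synthetic = [25, 33, 37, 46, 54, 62]
print(wasserstein_1d(real, synthetic))  # → 2.0
```

In a WGAN the critic network approximates this distance between real and generated batches; minimizing it pulls the synthetic population's marginals toward those of the real micro-data.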
Multi-scale Intervention Planning based on Generative Design
Kavouras, Ioannis, Rallis, Ioannis, Sardis, Emmanuel, Protopapadakis, Eftychios, Doulamis, Anastasios, Doulamis, Nikolaos
The scarcity of green spaces in urban environments constitutes a critical challenge, with multiple adverse effects on the health and well-being of citizens. Small-scale interventions, e.g. pocket parks, are a viable solution, but come with multiple constraints involving their design and implementation in a specific area. In this study, we harness the capabilities of generative AI for multi-scale intervention planning, focusing on nature-based solutions (NBS). By leveraging image-to-image and image-inpainting algorithms, we propose a methodology to address the green-space deficit in urban areas. Focusing on two alleys in Thessaloniki where greenery is lacking, we demonstrate the efficacy of our approach in visualizing NBS interventions. Our findings underscore the transformative potential of emerging technologies in shaping the future of urban intervention planning.
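The core idea of inpainting, replacing masked pixels with content inferred from their surroundings, can be shown in miniature. The generative models used for such visualizations are far more sophisticated; this toy neighbour-averaging fill (illustrative only, not the authors' pipeline) just shows the mechanics of filling a masked region:

```python
def inpaint(grid, mask, iters=50):
    """Fill masked cells with the average of their 4-neighbours, iteratively.

    grid: 2-D list of floats; mask: 2-D list of bools (True = to be filled).
    A crude stand-in for an image-inpainting step: masked regions are
    smoothly interpolated from the unmasked pixels around them.
    """
    h, w = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for _ in range(iters):
        new = [row[:] for row in g]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [g[y + dy][x + dx]
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w]
                    new[y][x] = sum(nbrs) / len(nbrs)
        g = new
    return g

# A 3x3 "image" whose centre pixel is missing; it is reconstructed
# from the surrounding intensities.
img = [[1.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
msk = [[False] * 3, [False, True, False], [False] * 3]
filled = inpaint(img, msk)
print(filled[1][1])  # → 1.0
```

Real inpainting models replace the averaging step with a learned generative prior, which is what lets them paint trees and greenery rather than smooth gradients.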
Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations
Doumanoglou, Alexandros, Asteriadis, Stylianos, Zarpalas, Dimitrios
Abstract--An important line of research attempts to explain CNN image classifier predictions and intermediate-layer representations in terms of human-understandable concepts. In this work, we expand on previous works in the literature that use annotated concept datasets to extract interpretable feature-space directions, and propose an unsupervised post-hoc method to extract a disentangling interpretable basis by looking for the rotation of the feature space that yields sparse, one-hot, thresholded transformed representations of pixel activations. CNNs can be used in robotics, visual understanding, automatic risk assessment and more. However, to a human expert, CNNs are often black boxes and the reasoning behind their predictions can be unclear. Beyond this early result, more recent rigorous experimentation showed that the linear separability of features corresponding to different semantic concepts increases towards the top layer [6]. The latter has been attributed to the top layer's linearity and the fact that intermediate layers are enforced to …
A. Doumanoglou and D. Zarpalas are with the Information Technologies Institute (ITI), Centre for Research and Technology HELLAS (CERTH), Thessaloniki.
Figure 1: Left: In a standard convolution layer with D filters, all the filters work together to transform each input patch to a feature vector of spatial dimensionality 1x1. Thus the dimensionality of the feature space equals the number of filters in the layer, and each spatial element of the transformed representation constitutes a sample in this feature space. Middle: Finding an interpretable basis in this feature space in a supervised way means training a set of linear classifiers (concept detectors), one for each interpretable concept, using feature vectors corresponding to image patches containing the concept.
We observe that, in a successfully learned interpretable basis, a single pixel is classified positively by at most one classifier among a group of classifiers trained to detect mutually exclusive concepts. Projecting new samples onto such a basis yields a sparse transformed representation.
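The mutual-exclusivity property can be made concrete in two dimensions: if the concept axes are a rotation of the feature axes, expressing a feature vector in the rotated basis and thresholding should activate at most one concept per pixel. This is a toy sketch under that assumption, not the paper's optimization procedure:

```python
import math

def rotate(v, theta):
    """Rotate a 2-D vector by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

THETA = math.pi / 4    # hypothetical ground-truth rotation of the concept axes
THRESH = 0.5           # activation threshold for each "concept detector"

# Pixels whose representations are one-hot *in the rotated basis*:
# each pixel expresses exactly one concept.
concept_codes = [(1.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
features = [rotate(v, THETA) for v in concept_codes]   # what the layer emits

# Undo the rotation (i.e. express each feature in the interpretable basis),
# threshold, and count how many concepts fire per pixel.
actives = [sum(1 for x in rotate(f, -THETA) if x > THRESH) for f in features]
print(actives)  # → [1, 1, 1]: at most one concept per pixel
```

The unsupervised method described above searches for such a rotation directly, scoring candidate bases by how sparse and one-hot the thresholded transformed representations become.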
Does Noise Affect Housing Prices? A Case Study in the Urban Area of Thessaloniki
Kamtziridis, Georgios, Vrakas, Dimitris, Tsoumakas, Grigorios
Real estate markets depend on various methods to predict housing prices, including models that have been trained on datasets of residential or commercial properties. Most studies endeavor to create more accurate machine learning models by utilizing data such as basic property characteristics as well as urban features like distances from amenities and road accessibility. Even though environmental factors like noise pollution can potentially affect prices, the research around this topic is limited. One of the reasons is the lack of data. In this paper, we reconstruct and make publicly available a general purpose noise pollution dataset based on published studies conducted by the Hellenic Ministry of Environment and Energy for the city of Thessaloniki, Greece. Then, we train ensemble machine learning models, like XGBoost, on property data for different areas of Thessaloniki to investigate the way noise influences prices through interpretability evaluation techniques. Our study provides a new noise pollution dataset that not only demonstrates the impact noise has on housing prices, but also indicates that the influence of noise on prices significantly varies among different areas of the same city.
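The headline finding, that the influence of noise on prices varies by area, can be illustrated with a minimal single-feature least-squares fit per area. The numbers below are invented for illustration; the study itself trains ensemble models such as XGBoost and inspects them with interpretability techniques:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x (single feature, closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical data: noise level (dB) vs. asking price (EUR/m2) in two areas.
noise_a = [55, 60, 65, 70]
price_a = [2000, 1900, 1800, 1700]   # area A: price falls 20 EUR/m2 per dB
noise_b = [55, 60, 65, 70]
price_b = [1500, 1480, 1460, 1440]   # area B: price falls only 4 EUR/m2 per dB

print(slope(noise_a, price_a))  # → -20.0
print(slope(noise_b, price_b))  # → -4.0
```

The same one-decibel increase is associated with a five-fold larger price drop in area A than in area B, which is the kind of area-dependent effect the interpretability analysis surfaces from the trained models.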
Free data science ebooks for June 2019
The book offers a quick entry into building machine learning models in R. Theory is kept to a minimum, and there are examples for each of the major algorithms for classification, clustering, feature engineering and association rules. The book is a compilation of the leaflets the authors give to their students during the practice labs of the Pattern Recognition and Data Mining courses in the Electrical and Computer Engineering Department of the Aristotle University of Thessaloniki.
Examining Deep Learning Architectures for Crime Classification and Prediction
Stalidis, Panagiotis, Semertzidis, Theodoros, Daras, Petros
Abstract--In this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. We examine the effectiveness of deep learning algorithms in this domain and provide recommendations for designing and training deep learning systems for predicting crime areas, using open data from police reports. Having as training data time series of crime types per location, a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations is conducted. In our experiments with five publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. Moreover, we evaluate the effectiveness of different parameters in the deep learning architectures and give insights for configuring them in order to achieve improved performance in crime classification and, finally, crime prediction.
Predictive policing is the use of analytical techniques to identify either likely places of future crime scenes or past crime perpetrators, by applying statistical predictions [29]. As a crime typically involves a perpetrator and a target and occurs at a certain place and time, techniques of predictive policing need to answer: a) who will commit a crime, b) who will be offended, c) what type of crime, d) in which location and e) at what time a new crime will take place. This work does not focus on the victim and the offender, but on the prediction of occurrence of a certain crime type per location and time using past data. The ultimate goal, in a policing context, is the selection of the top areas in the city for the prioritization of law enforcement resources per department. One of the most challenging issues for police departments is to produce accurate crime forecasts so as to dynamically deploy patrols and other resources, improving both the deterrence of crime and police response times.
Routine activity theory [8] suggests that most crimes take place when three conditions are met: a motivated offender, a suitable victim and lack of victim protection. Rational choice theory [9] suggests that a prospective criminal weighs the gain of successfully committing the crime against the probability of being caught, and makes a rational choice whether to actually commit the crime or not. Both theories agree that a crime takes place when a person willing to commit it has an opportunity to do so.
The authors are with the Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece.
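The prioritization task described above, ranking the top areas in a city from per-location crime time series, admits a very simple baseline: forecast each location with a moving average and rank by the forecast. This toy sketch (invented counts, not the paper's data or its deep models) shows the shape of the task:

```python
def forecast_next(series, window=3):
    """Predict next period's count as the mean of the last `window` periods."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def top_areas(history, k=2):
    """Rank locations by forecast crime count and return the top k."""
    scores = {loc: forecast_next(ts) for loc, ts in history.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical monthly counts of one crime type per grid cell.
history = {
    "cell_A": [4, 5, 6, 7],    # rising trend
    "cell_B": [9, 8, 3, 2],    # falling trend
    "cell_C": [5, 5, 5, 5],    # stable
}
print(top_areas(history, k=2))  # → ['cell_A', 'cell_C']
```

The deep learning configurations compared in the paper replace the moving-average predictor with learned spatio-temporal models, but the output consumed for patrol deployment is the same kind of ranked list of areas.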
When nation-states fail, moderate voices are silenced
The image of the angry man holding a little girl in one arm while violently abusing Yiannis Boutaris, the 75-year-old mayor of Thessaloniki with the other, shocked Greeks. Boutaris was attacked by a crowd at a Sunday commemoration of what is known as the genocide of the Pontians, a Christian ethnic group from the highlands of the southern Black Sea speaking a dialect of Greek, who escaped Ottoman Turkish persecution and emigrated to the newly-formed Greek nation-state. It is the latest in a series of violent attacks on Greek politicians by a public expressing outrage and impotence at collapsing living standards. But Boutaris, the tattooed septuagenarian ecologist and scion of a Vlach winemaking family, is not a typical representative of the establishment politicians Greeks blame for imposing increasing levels of austerity on a fractured society. The twice-elected mayor of Thessaloniki is a resolute cosmopolitan who worked to collapse the walls between Greece and Turkey, free his city from the segregating Greek creation myth, and make its troubled past more inclusive of former resident minorities, including Jews and Turks.
Fact and Fiction Behind the Threat of 'Killer AI'
However, Oren Etzioni, professor of Computer Science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence, argues that such headlines are in fact strongly influenced by the work of one man: professor Nick Bostrom of the Faculty of Philosophy at Oxford University, author of the bestselling treatise Superintelligence: Paths, Dangers, and Strategies. Essentially, Bostrom claims that if machine brains surpass human brains in general intelligence, the resultant new 'superintelligence' could replace humans as the dominant lifeform on Earth. Furthermore, according to his findings, there is a 10-percent probability that human-level AI will be attained by 2022, a 50-percent probability that this feat will be achieved by 2040, and a 90-percent probability that such an entity will be created by 2075. However, in his article published in the MIT Technology Review magazine, Etzioni points out that Bostrom's main source of data is an aggregate of four different surveys of groups, including participants of the Philosophy and Theory of AI conference that was held in 2011 in Thessaloniki, and members of the Greek Association for Artificial Intelligence. Furthermore, it appears that Bostrom didn't provide the response rates or the phrasing of the questions used during those surveys, and neither did he account for the reliance on data collected in Greece.
The Third Competition on Knowledge Engineering for Planning and Scheduling
Bartak, Roman (Charles University) | Fratini, Simone (Italian National Research Council) | McCluskey, Lee (University of Huddersfield)
We report on the staging of the third competition on knowledge engineering for AI planning and scheduling systems, held during ICAPS-09 at Thessaloniki, Greece in September 2009. We give an overview of how the competition has developed since its first run in 2005, and of its relationship with the AI planning field. This run of the competition focused on translators that, given a formal description in an application-area-specific language, output solver-ready domain models. Despite a fairly narrow focus within knowledge engineering, seven teams took part in what turned out to be a very interesting and successful competition.